Static invoice server #3628
Conversation
Force-pushed from 1c6073b to 4a287ea.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3628      +/-   ##
==========================================
- Coverage   88.89%   88.78%    -0.11%
==========================================
  Files         165      166        +1
  Lines      118886   119308      +422
  Branches   118886   119308      +422
==========================================
+ Hits       105680   105929      +249
- Misses      10889    11041      +152
- Partials     2317     2338       +21

View full report in Codecov by Sentry.
Force-pushed from 4a287ea to 93bd4d0.
Force-pushed from 23f3276 to 706e277.
In upcoming commits, we need to check whether a static invoice or its underlying offer is expired in no-std builds. Here we expose the methods to do so. The methods could instead be kept private to the crate, but they seem potentially useful.
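As a small illustration of the no-std usage, here is a hedged sketch (the `StaticInvoice` import path is assumed, and the caller is responsible for sourcing the current time since `SystemTime::now()` is unavailable in no-std):

```rust
use core::time::Duration;
use lightning::offers::static_invoice::StaticInvoice;

// Returns whether the offer underlying this static invoice is still usable,
// given a caller-supplied "now" expressed as a duration since the Unix epoch.
fn offer_still_usable(invoice: &StaticInvoice, now_since_epoch: Duration) -> bool {
	!invoice.is_offer_expired_no_std(now_since_epoch)
}
```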
Force-pushed from 706e277 to f5a78e4.
Force-pushed from f5a78e4 to 2dc0bc6.
Still need to write a handful of tests, but this should be good for a look.
Did a high-level pass. Fairly straightforward and less complicated than the client.

Two main points:

- Persistence/rate limiting is now offloaded to the user of LDK and/or to LDK Node. Perhaps code should be provided that takes a `KVStore` and makes it happen. Related: persistence for static invoices is event-based, but for the chain and channel monitors there are the `Persist` and `Persister` traits. Perhaps the same concept should be used?
- A lot of the 'refresh' logic is handled through expiry, but does that work if you can't control your peers?
pub fn is_offer_expired_no_std(&self, duration_since_epoch: Duration) -> bool {
	self.contents.is_offer_expired_no_std(duration_since_epoch)
}

#[allow(unused)] // TODO: remove this once we remove the `async_payments` cfg flag
Remove cfg flag in this PR?
If we don't, it might be worth switching these TODOs to `cfg_attr(async_payments, allow(unused))`, which would make sure we eventually remove them.
We can't remove the cfg flag until we have blinded path auth at a minimum (either via #3845 or re-adding the nonce and hmacs that were previously present in #3618). There may be other requirements as well; I need to discuss with @TheBlueMatt to get on the same page about which follow-ups documented on #2298 are blocking.
Can't the cfg flag be removed everywhere except for the points/gates where auth would be required?
If we can't remove them right now, it might also be worth switching the `cfg(async_payments)` CI to cover the entire workspace, not only the `lightning` crate; see #3904.
> Can't the cfg flag be removed everywhere except for the points/gates where auth would be required?

Maybe, but it does feel weird to publicly expose unusable APIs? It feels like less work to expose them all in one go than to surgically remove a few now, IMO, but we can go down that route if people feel strongly.

Aiming to take a look at solving #3904 tomorrow, thanks @tnull for catching that!
I still think it's better to have as few conditional compiles as possible: the onion message handlers and the public API only. That allows as many bugs as possible to be caught without a separate async_payments build.

Currently there are 78 `#[cfg(async_payments)]` directives. I'd expect that 90% can be removed already now.

Also interested to know how much work re-adding the nonce and hmacs really is. If that's an hour of work, I'd just do it to get rid of the dependency on another PR.
///
/// ## Usage
/// 1. Static invoice server calls [`Self::blinded_paths_for_async_recipient`]
/// 2. Static invoice server communicates the resulting paths out-of-band to the async recipient,
How does this out-of-band communication work in practice? Maybe an LSP-proprietary communication channel to the LSP clients?
Might indeed be interesting to explore an LSP spec extension protocol based on bLIP-50 to transmit the paths.
IIUC, LSPs frequently need a way to communicate out-of-band anyway, such as transmitting intercept SCIDs for inclusion in a recipient's invoice. I discussed the out-of-band approach in this PR with the Lexe folks and they seemed comfortable with it. Not opposed to an LSP spec extension at all though.
If that out-of-band channel exists anyway, could it have been used for storing the static invoices too?
Maybe it is worth thinking this through a bit more outside the context of this PR. What are the instructions/recommendations we'd give users who want to run an LSP with this?
Currently the instructions are largely in the new event docs; let me know if you think there's something missing. Of course pending the outcome of this discussion: #3628 (comment)
I was specifically thinking of out-of-band communication for transmitting paths to the static invoice server.
I think that'd be pretty specific to the LSP's architecture. Not sure if we want to go down the road of describing what websockets/REST calls are. Also have very limited platform-specific knowledge myself 😅
I am just wondering why we defined the recipient<->server protocol via onion messages when the LSP already has a communication channel with its clients (LSP spec, websockets, or something else)?
Basically, it saves the LSP from exchanging several messages and managing the cache. Unless we build it, LDK users will need to do the offer paths <> persist_invoice <> invoice_persisted dance themselves. This reduces the dance to a single one-way exchange, and we handle managing the offer cache.
Took a first high-level look.

I agree with some of the points Joost raised here. In particular, it might be worth exploring whether/how we can enforce rate-limiting across related contexts. In `lightning-liquidity` / for LSPS5 we now concluded that we'll enforce rate-limiting where possible, but also piggyback on top of the LSPS1/LSPS2 flows, which allow authenticating peers via the user-provided `token` field (which an LSP might or might not require).
/// [`ChannelManager::blinded_paths_for_async_recipient`]: crate::ln::channelmanager::ChannelManager::blinded_paths_for_async_recipient
/// [`ChannelManager::set_paths_to_static_invoice_server`]: crate::ln::channelmanager::ChannelManager::set_paths_to_static_invoice_server
#[cfg(async_payments)]
PersistStaticInvoice {
I assume there has been prior discussion around this that I'm unaware of, but it strikes me as a bit odd to manage persistence via the event queue if we have the `KVStore` trait readily available in the crate.

Pushing/popping the event will result in at least two additional `ChannelManager` repersists. Assuming that ~all users run this against the same persistence backend, I wonder why/if this approach is preferable in any way. I think @joostjager brought up a similar question above.
This is similar to the approach that was taken in #2973, though unfortunately I can't find a link to any discussion that landed on that approach...

But either way, I think the server needs to verify the `recipient_id` and rate limit them in their business logic before any persistence takes place, such as in a `KVStore`. I agree the events are weird, but it feels like rate limiting and persistence are something that should be handled in the layer that actually manages the database itself. Let me know what you think.

> Pushing/popping the event will result in at least two additional ChannelManager repersists

Hopefully @joostjager and I can get rid of `ChannelManager` persistence entirely after this feature is done 😅
> Hopefully @joostjager and I can get rid of `ChannelManager` persistence entirely after this feature is done 😅
Hmm, but even then we'd repersist the event queue twice, no? Granted, it would be much less overhead, but we'd still eat the IO latency once or twice before actually getting to persist the data (incurring more IO latency).
The link to `KVStore` needs to be made at some point anyway to get a working server. If we don't do it in LDK, the work is just shifted to LDK Node? Then if we do it here and can cut out the event queue and event queue persistence, the overall amount of work might be less.

I do see some intersection with #3778. The events as proposed in this PR would allow for async persistence, but it is yet again another way to do it alongside a native async interface and the `InProgress`/callback mechanism in the chain monitor.
> Hmm, but even then we'd repersist the event queue twice, no? Granted, it would be much less overhead, but we'd still eat the IO latency once or twice before actually getting to persist the data (incurring more IO latency).

Hmm... Thoughts on a separate PR that would allow us to process the event queue and skip manager persistence entirely for certain events? With these events and several others I can think of, no manager persistence is needed, and an event being dropped/failed to handle/redundantly handled should not be a problem.

I guess I'm just not sure how we can properly rate limit without knowing anything about the underlying database and its limitations. It feels like an instance where finer-grained control might be desirable and `rust-lightning` is the wrong layer, unlike with the current uses of `KVStore` where all channel updates are definitely necessary to persist. Also, I still think users need to authenticate `recipient_id`s in their business logic before any persistence takes place.
Assuming almost all users would use ldk-node, it doesn't matter so much in which layer `KVStore` is accessed. To me it seems fine to do it in ldk-node. I am just not very happy with the event mechanism used here. I think a `StaticInvoicePersister` trait would be simpler, faster (no event persistence), and better encapsulated (not part of a pile of events of all types).
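For concreteness, a rough sketch of what such a trait could look like; the name `StaticInvoicePersister` comes from the comment above, but the signatures and types here are illustrative assumptions rather than anything defined in this PR:

```rust
// Illustrative only: a dedicated persister trait in the spirit of `Persist`/`Persister`,
// implemented by the storage layer directly instead of going through the event queue.
pub trait StaticInvoicePersister {
	/// Store (or replace) the invoice held for `recipient_id` at `invoice_slot`.
	fn persist_invoice(
		&self, recipient_id: &[u8], invoice_slot: u16, invoice_bytes: &[u8],
	) -> Result<(), ()>;

	/// Fetch the invoice previously stored for `recipient_id` at `invoice_slot`, if any.
	fn get_invoice(&self, recipient_id: &[u8], invoice_slot: u16) -> Result<Option<Vec<u8>>, ()>;
}
```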
> Assuming almost all users would use ldk-node, it doesn't matter so much in which layer `KVStore` is accessed. To me it seems fine to do it in ldk-node. I am just not very happy with the event mechanism used here. I think a `StaticInvoicePersister` trait would be simpler, faster (no event persistence) and better encapsulated (not part of a pile of events of all types).
I agree with this sentiment: generally I don't have too much of a strong opinion on how we do it, but (ab)using the persisted event queue for persistence calls seems like unnecessary overhead, IMO.
As mentioned, I think we can avoid persisting the event queue when these events specifically are present, as well as for existing non-critical events that are in the same boat like `ConnectionNeeded` and `OnionMessagePeerConnected`.

But a separate trait is another approach that has been discussed in the past and that we can definitely revisit; will start an offline discussion.

Edit: discussed offline, all the options have downsides, so we're going with the event approach for now. A separate trait would mean blocking other onion messenger operations on persistence.
Force-pushed from 2dc0bc6 to 685a364.
Addressed feedback (diff). I think the main thing outstanding to discuss is whether the event-based approach makes sense conceptually, or whether more work should be shifted into LDK and we should persist directly to the `KVStore`.
Besides having the same questions about persistence and the OM interface (even though having the OM interface is a step forward for extending LSPs), and not having much context on previous steps, this seems good!

I just think it is missing one test related to the implementation doc note:

"Note that the server is not required to be a channel counterparty".

I played with the new tests a bit and it seems to work using a counterparty without channels 👇🏼
test code
#[test]
fn server_does_not_share_channel() {
	// Test that if the server does not share a channel with the recipient, the recipient still
	// receives the offer_paths message and can build a new offer.
	let chanmon_cfgs = create_chanmon_cfgs(4);
	let node_cfgs = create_node_cfgs(4, &chanmon_cfgs);
	let node_chanmgrs = create_node_chanmgrs(4, &node_cfgs, &[None, None, None, None]);
	let nodes = create_network(4, &node_cfgs, &node_chanmgrs);
	create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 1_000_000, 0);
	create_announced_chan_between_nodes_with_value(&nodes, 1, 2, 1_000_000, 0);
	create_unannounced_chan_between_nodes_with_value(&nodes, 2, 3, 1_000_000, 0);

	// sender <--announced--> server <--announced--> router <--unannounced--> recipient
	// I think in this case the recipient must know routes to the server.
	// This won't work because we do not support forwarding yet (?).
	// So we are not sending the payment.
	let sender = &nodes[0];
	let server = &nodes[1];
	let recipient = &nodes[3];

	let recipient_id = vec![42; 32];
	let inv_server_paths =
		server.node.blinded_paths_for_async_recipient(recipient_id.clone(), None).unwrap();
	recipient.node.set_paths_to_static_invoice_server(inv_server_paths).unwrap();

	let invoice_flow_res =
		pass_static_invoice_server_messages(server, recipient, recipient_id.clone());
	let static_invoice = invoice_flow_res.invoice;
	assert!(static_invoice.invoice_features().supports_basic_mpp());
	let offer = recipient.node.get_async_receive_offer().unwrap();

	let amt_msat = 5000;
	let payment_id = PaymentId([1; 32]);
	let params = RouteParametersConfig::default();
	sender
		.node
		.pay_for_offer(&offer, None, Some(amt_msat), None, payment_id, Retry::Attempts(0), params)
		.unwrap();

	let release_held_htlc_om = pass_async_payments_oms(
		static_invoice.clone(),
		sender,
		server,
		recipient,
		recipient_id,
	)
	.1;
	sender
		.onion_messenger
		.handle_onion_message(recipient.node.get_our_node_id(), &release_held_htlc_om);

	// Check that we've queued the HTLCs of the async keysend payment.
	let mut events = sender.node.get_and_clear_pending_msg_events();
	assert_eq!(events.len(), 1);
	let ev = remove_first_msg_event_to_node(&server.node.get_our_node_id(), &mut events);
	let payment_hash = extract_payment_hash(&ev);
	check_added_monitors!(sender, 1);

	// Receiving a duplicate release_htlc message doesn't result in a duplicate payment.
	sender
		.onion_messenger
		.handle_onion_message(recipient.node.get_our_node_id(), &release_held_htlc_om);
	assert!(sender.node.get_and_clear_pending_msg_events().is_empty());
}
Force-pushed from 685a364 to 5bc8015.
Tests should be updated; I think I still have a comment thread or two to reply to, but this should be ready for another look. @a-mpch Thanks for the test! I'm going to modify it so the server has no channels at all, but will incorporate it otherwise.
Did another pass. Not so much to say about this PR; it really is mostly boilerplate / threading through.
@@ -194,6 +194,16 @@ const OFFER_REFRESH_THRESHOLD: Duration = Duration::from_secs(2 * 60 * 60);
#[cfg(async_payments)]
const MIN_OFFER_PATHS_RELATIVE_EXPIRY_SECS: u64 = 3 * 30 * 24 * 60 * 60;

#[cfg(all(test, async_payments))]
pub(crate) const TEST_MAX_CACHED_OFFERS_TARGET: usize = MAX_CACHED_OFFERS_TARGET;
Shouldn't this be near the tests?
I don't think that would work since the const is private to this file?
You could have exposed the main const only for tests, but it would still require duplication. Maybe worse, because the constant value would need to be repeated?

Making it `pub(crate)` unconditionally may not have been such a problem either, I think.

Either way, non-blocking.
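To make the trade-off concrete, a small illustrative comparison (the constant value is a placeholder, not the real one):

```rust
// (a) What the PR does: keep the main const private to this file and add a
//     test-only alias next to it, duplicating the name but not the value.
const MAX_CACHED_OFFERS_TARGET: usize = 10; // placeholder value
#[cfg(all(test, async_payments))]
pub(crate) const TEST_MAX_CACHED_OFFERS_TARGET: usize = MAX_CACHED_OFFERS_TARGET;

// (b) The alternative discussed above: widen the main const to pub(crate)
//     unconditionally so tests elsewhere in the crate can use it directly.
// pub(crate) const MAX_CACHED_OFFERS_TARGET: usize = 10;
```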
Have yet to take a more detailed look at the test code.

Otherwise looks good; I mostly still have one question regarding the persistence scheme and `invoice_slot`.
///
/// When this invoice and its metadata are persisted, this slot number should be included so if
/// we receive another [`Event::PersistStaticInvoice`] containing the same slot number we can
/// swap the existing invoice out for the new one.
So what would this mean concretely for our persistence scheme?

For example, how would we build our `key` if we persist against a key-value store? I guess to make sure we'd only ever persist so many slots per recipient, it would mean that we'd do `{recipient_id}##{invoice_slot}` as a classical key (or use `secondary_namespace: recipient_id`, `key: invoice_slot` for `KVStore`). But how could we then query the specific invoice if we receive `StaticInvoiceRequested`?
Discussed with @TheBlueMatt, seems our options are:

- A `key` of `recipient_id`, where the value is a blob containing the recipient's list of invoices and metadata. So on invoice update we'd pull the list, modify it, and write it out again. On `StaticInvoiceRequested` we'd also pull the full list. This is likely to be resilient up to a ~50KiB blob per user, which is probably like 40-60 invoices (currently we limit to 10).
- A `secondary_namespace` of `recipient_id` and a `key` of `invoice_id`. This makes it easy at `StaticInvoiceRequested` time. On `ServeStaticInvoice`, we would either write out a new key if the `invoice_id` doesn't exist yet, or replace the current value at the existing `invoice_id`. Then either we iterate through all of the recipient's invoices to remove the replaced one, or live with duplicate slot numbers for a bit and sweep the replaced invoices in the background. This is probably the more performant/scalable option, though a bit more complicated.
We can also start with the simpler option 1 and migrate to option 2 when necessary.
Thoughts?
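To make the two layouts concrete, here is a hedged sketch of how they might map onto `KVStore`-style `(primary_namespace, secondary_namespace, key)` locations; the namespace names and the hex encoding are assumptions for illustration only:

```rust
fn hex(bytes: &[u8]) -> String {
	bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

// Option 1: a single blob per recipient holding the full list of invoices and metadata.
fn option1_location(recipient_id: &[u8]) -> (String, String, String) {
	("static_invoices".to_string(), String::new(), hex(recipient_id))
}

// Option 2: one entry per invoice, keyed by invoice_id under the recipient's namespace.
fn option2_location(recipient_id: &[u8], invoice_id: u64) -> (String, String, String) {
	("static_invoices".to_string(), hex(recipient_id), invoice_id.to_string())
}
```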
Right, both approaches could make sense. In LDK Node we'd probably either store the static invoice as part of the 'payment metadata' store (tba.), or (more likely) create a new store implementation based on our `DataStore` abstraction (see https://github.com/lightningdevkit/ldk-node/blob/main/src/data_store.rs).

However, I'm still a bit dubious how `invoice_slot` would get into the mix. Wouldn't it simply be the responsibility of whoever implements the storage to make sure they only store so many invoices per recipient id, with the scheme they employ for that (e.g. round-robin) left as an exercise for them? Do we really need `invoice_slot`, or can it simply be dropped from the API?
Right, I was originally thinking this invoice persistence logic would all live in LDK Node, but Matt suggested we have a separate struct/crate in `rust-lightning` that is parameterized by the `KVStore` and handles all the invoice slot replacement automatically. This new struct would basically have two methods, `persist_invoice` and `get_invoice`, and LDK Node/other RL users would call into it as part of handling the new events. Without knowing much about how LDK Node persistence works, would that make sense to you?

Regarding dropping the `invoice_slot`: IIUC, the recipient needs to be the one to tell the server which invoices are rotate-able, since the server doesn't know which offers the recipient has e.g. hardcoded onto their website already.
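A minimal sketch of the struct described above; the struct name, the stand-in store trait, and the namespace/key scheme are all illustrative assumptions rather than anything defined by this PR:

```rust
// A minimal stand-in for LDK's key-value store interface; the real `KVStore`
// trait in `lightning::util::persist` differs in details across versions.
pub trait KvStore {
	fn read(&self, primary: &str, secondary: &str, key: &str) -> Result<Vec<u8>, ()>;
	fn write(&self, primary: &str, secondary: &str, key: &str, buf: &[u8]) -> Result<(), ()>;
}

pub struct StaticInvoiceStore<K: KvStore> {
	kv_store: K,
}

impl<K: KvStore> StaticInvoiceStore<K> {
	/// Persist (or overwrite) the invoice for this recipient's slot. Writing the same
	/// slot twice replaces the old invoice, giving the rotation semantics discussed above.
	pub fn persist_invoice(
		&self, recipient_id_hex: &str, invoice_slot: u16, invoice_bytes: &[u8],
	) -> Result<(), ()> {
		self.kv_store
			.write("static_invoices", recipient_id_hex, &invoice_slot.to_string(), invoice_bytes)
	}

	/// Fetch the invoice to reply with when handling a `StaticInvoiceRequested` event.
	pub fn get_invoice(&self, recipient_id_hex: &str, invoice_slot: u16) -> Result<Vec<u8>, ()> {
		self.kv_store.read("static_invoices", recipient_id_hex, &invoice_slot.to_string())
	}
}
```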
> Right, I was originally thinking this invoice persistence logic would all live in LDK Node, but Matt suggested we have a separate struct/crate in `rust-lightning` that is parameterized by the `KVStore` and handles all the invoice slot replacement automatically. This new struct would basically have two methods, `persist_invoice` and `get_invoice`, and LDK Node/other RL users would call into it as part of handling the new events. Without knowing much about how LDK Node persistence works, would that make sense to you?
Sure, that makes sense to me, although note that this new struct would of course need to support dual async/sync persistence going forward, essentially post-#3905. Probably would make sense to sync with @joostjager on what a future-proof API would look like if you start building a new one right now.
Future-proof would be to parameterize using the async `KVStore`. Then we have wrappers available to make it work for sync users. With the event approach chosen in this PR, I think we have enough flexibility and don't require an additional asyncification pass.
Force-pushed from 5bc8015 to f24da56.
As part of serving static invoices to payers on behalf of often-offline recipients, these recipients need a way to contact the static invoice server to retrieve blinded paths to include in their offers. Add a utility to create blinded paths for this purpose as a static invoice server. The recipient will be configured with the resulting paths and use them to request offer paths on startup.
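For reference, the handshake described above looks roughly like the following, adapted from the functional test quoted earlier in this thread (`server` and `recipient` are test node handles, and how the paths travel out-of-band is left to the implementation):

```rust
// Server side: mint blinded paths for this recipient, identified by an opaque id.
let recipient_id = vec![42; 32];
let inv_server_paths =
	server.node.blinded_paths_for_async_recipient(recipient_id.clone(), None).unwrap();

// ...the paths are delivered to the recipient out-of-band (LSP-specific channel)...

// Recipient side: configure the paths so offer-paths requests go out on startup.
recipient.node.set_paths_to_static_invoice_server(inv_server_paths).unwrap();
```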
As part of serving static invoices to payers on behalf of often-offline recipients, we need to provide the async recipient with blinded message paths to include in their offers. Support responding to inbound requests for offer paths from async recipients.
As part of serving static invoices to payers on behalf of often-offline recipients, the recipient will send us the final static invoice once it's done being interactively built. We will then persist this invoice and confirm to them that the corresponding offer is ready to be used for async payments. Surface an event once the invoice is received and expose an API to tell the recipient that it's ready for payments.
Here we implement serving static invoices to payers on behalf of often-offline recipients. These recipients previously encoded blinded paths terminating at our node in their offer, so we receive invoice requests on their behalf. Handle those inbound invreqs by retrieving a static invoice we previously persisted on behalf of the payee, and forward it to the payer as a reply to their invreq.
We're about to add a bunch more async payments tests, so take this opportunity to clean up the existing tests by no longer hardcoding the keysend payment preimage bytes ahead of time. This previously caused an MPP test to spuriously fail because all the session_privs were the same, and is generally not ideal. Also add a few comments to an existing test and a few more trivial cleanups.
Previously, the test-only method OnionMessenger::release_pending_msgs would not release a message if it was pending in e.g. the OffersMessageHandler but not in the messenger's own queue. Update this to pull pending messages from all handlers before releasing pending messages.
We were manually creating the static invoice in tests, but now we can use the static invoice server protocol to interactively build the invoice.
Force-pushed from f24da56 to 9288153.
	&self, recipient_id: Vec<u8>, relative_expiry: Option<Duration>,
	peers: Vec<MessageForwardNode>,
) -> Result<Vec<BlindedMessagePath>, ()> {
	if recipient_id.len() > 1024 {
Introduce constant?
It feels like overkill to me to have a public constant for this case; are you okay with a const that's internal to the method?
It's non-blocking. A (local) const with a comment could be nice to explain the 1024.
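Something like the following, where the function wrapper, const name, and rationale comment are all illustrative:

```rust
fn check_recipient_id_len(recipient_id: &[u8]) -> Result<(), ()> {
	// Cap the opaque recipient id so a misbehaving client can't make us build
	// arbitrarily large state or paths (the rationale wording here is assumed).
	const MAX_RECIPIENT_ID_LEN: usize = 1024;
	if recipient_id.len() > MAX_RECIPIENT_ID_LEN {
		return Err(());
	}
	Ok(())
}
```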
This PR looks good to me. As mentioned earlier, some 90% or so is threading through and boilerplate. When the follow-up adds the actual write to disk, it would be worth taking another close look at that path to assess how it might be vulnerable to abuse.
LGTM
As part of supporting async payments, we want to support serving static invoices on behalf of an often-offline payee, in response to invoice requests from payers.
See this doc for more details on the protocol. Here we implement the server-side of the linked protocol.
Based on #3618